A Google engineer's testimony shows how page quality is scored and confirms the existence of a popularity signal that uses Chrome data.
Google Sheets' new AI function brings Gemini-powered language models directly into your spreadsheet cells without any add-ons. With it, you can generate fresh text, summarize blocks of data, categorize entries, or even infer sentiment, all by typing a simple formula.
The article provides examples such as:
- *sentiment analysis* `=AI("Is this customer feedback positive, negative, or neutral?", A2)`
- *data categorization* `=AI("Classify this expense as Travel, Office, or Other", D3)`
- *simple calculations* `=AI("Add the numbers in these cells", A1:A5)`
This article details the release of Gemma 3, the latest iteration of Google’s open-weights language model. Key improvements include **vision-language capabilities** (using a tailored SigLIP encoder), **increased context length** (up to 128k tokens for larger models), and **architectural changes for improved memory efficiency** (5-to-1 interleaved attention and removal of softcapping). Gemma 3 demonstrates superior performance compared to Gemma 2 across benchmarks and offers models optimized for various use cases, including on-device applications with the 1B model.
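As a rough illustration of what 5-to-1 interleaving means (a sketch, not Gemma 3's actual implementation), the snippet below assigns five local sliding-window layers for every global-attention layer by layer index:

```python
# Illustrative sketch only: a hypothetical 5:1 schedule that alternates five
# local sliding-window attention layers with one global-attention layer.
def attention_kind(layer_idx: int, local_per_global: int = 5) -> str:
    period = local_per_global + 1
    return "global" if layer_idx % period == local_per_global else "local_sliding_window"

# In a 12-layer stack, layers 5 and 11 would use global attention; the rest stay local.
print([attention_kind(i) for i in range(12)])
```

Because only the global layers need a KV cache spanning the full context, keeping most layers local is what drives the memory savings at long context lengths.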
This document explains how to run Gemma models, covering framework selection, variant choice, and issuing generation/inference requests. It emphasizes matching the model to your available hardware resources and provides recommendations for beginners.
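For a quick start, here is a minimal sketch of a generation request, assuming the Hugging Face transformers framework and the 1B instruction-tuned variant (`google/gemma-3-1b-it`); the guide itself covers other frameworks and sizes:

```python
# Minimal sketch: a text-generation request against a small Gemma variant with
# Hugging Face transformers. The model ID and settings are assumptions; pick a
# variant that fits your hardware (the 1B model runs on modest GPUs or CPUs).
from transformers import pipeline

pipe = pipeline("text-generation", model="google/gemma-3-1b-it", device_map="auto")

messages = [{"role": "user", "content": "Explain what a context window is in one paragraph."}]
result = pipe(messages, max_new_tokens=150)

# With chat-style input, generated_text holds the whole conversation; the last turn is the reply.
print(result[0]["generated_text"][-1]["content"])
```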
Gemini 2.5 Flash is Google's new, faster, and more cost-effective model with adjustable 'thinking' capabilities. The article details how to use it with llm-gemini, explores pricing differences compared to Gemini 2.0 Flash, and shares example SVG outputs.
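The article works through llm-gemini; as a rough equivalent, this sketch uses Google's google-genai Python SDK instead, mainly to show the adjustable thinking budget (the preview model ID is an assumption and may have changed):

```python
# Sketch with the google-genai SDK (not the llm-gemini CLI used in the article).
# thinking_budget=0 disables thinking for cheaper, faster responses; a larger
# budget allows more reasoning tokens before the final answer.
from google import genai
from google.genai import types

client = genai.Client()  # reads GEMINI_API_KEY from the environment

response = client.models.generate_content(
    model="gemini-2.5-flash-preview-04-17",  # assumed preview ID; check current docs
    contents="Generate an SVG of a bicycle as a single <svg> element.",
    config=types.GenerateContentConfig(
        thinking_config=types.ThinkingConfig(thinking_budget=0),
    ),
)
print(response.text)
```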
Google's John Mueller downplayed the usefulness of LLMs.txt, comparing it to the keywords meta tag: AI bots aren't currently checking for the file, and it opens the door to cloaking.
Gemini Code Assist, now powered by Gemini 2.5, shows significant improvement in coding capabilities and introduces AI agents to assist across the software development lifecycle. The article details the features available in the free, Standard, and Enterprise tiers and raises questions about agent availability and practical implementation.
Google releases Gemma 3, a new iteration of its Gemma family of models. The models range from 1B to 27B parameters, handle context windows of up to 128k tokens, accept both images and text, and cover 140+ languages. This article details the technical enhancements (longer context, multimodality, multilinguality) and provides information on inference with Hugging Face transformers, on-device deployment, and evaluation.
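To make the multimodality concrete, a minimal sketch of image-plus-text inference with Hugging Face transformers is shown below; the checkpoint name and image URL are illustrative assumptions:

```python
# Sketch: multimodal (image + text) inference with a Gemma 3 checkpoint via the
# transformers image-text-to-text pipeline. Requires a transformers release with
# Gemma 3 support; the checkpoint and URL below are placeholders.
from transformers import pipeline

pipe = pipeline("image-text-to-text", model="google/gemma-3-4b-it", device_map="auto")

messages = [{
    "role": "user",
    "content": [
        {"type": "image", "url": "https://example.com/chart.png"},
        {"type": "text", "text": "Describe the main trend shown in this chart."},
    ],
}]
result = pipe(text=messages, max_new_tokens=128)
print(result[0]["generated_text"][-1]["content"])
```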
The Gemini API documentation provides comprehensive information about Google's Gemini models and their capabilities. It includes guides on generating content with Gemini models, native image generation, long context exploration, and generating structured outputs. The documentation offers examples in Python, Node.js, and REST for using the Gemini API, covering various applications like text and image generation, and integrating Gemini in Google AI Studio.
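As one concrete illustration of the structured-output guide, here is a short Python sketch with the google-genai SDK; the schema and model ID are illustrative rather than copied from the docs:

```python
# Sketch: requesting JSON from Gemini that conforms to a Pydantic schema.
# The model ID and schema are placeholders for illustration.
from google import genai
from google.genai import types
from pydantic import BaseModel


class ArticleIdea(BaseModel):
    title: str
    topics: list[str]


client = genai.Client()  # reads GEMINI_API_KEY from the environment

response = client.models.generate_content(
    model="gemini-2.0-flash",
    contents="Suggest three article ideas about spreadsheet automation.",
    config=types.GenerateContentConfig(
        response_mime_type="application/json",
        response_schema=list[ArticleIdea],
    ),
)
print(response.text)    # raw JSON string matching the schema
print(response.parsed)  # the same data parsed into ArticleIdea instances
```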
Google's John Mueller discusses the issue of low-effort content that may look good but lacks genuine expertise, particularly noting the use of AI-generated images as a potential signal of low-quality content.